There remains an extreme performance gap between Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) when training from scratch on small datasets, which is attributed to the lack of inductive bias. In this paper, we further consider this problem and point out two weaknesses of ViTs in inductive biases, namely spatial relevance and diverse channel representation. First, on the spatial aspect, objects are locally compact and relevant, so fine-grained features need to be extracted from a token and its neighbors; however, the lack of data hinders ViTs from attending to spatial relevance. Second, on the channel aspect, representations exhibit diversity across different channels, but the scarce data prevents ViTs from learning representations strong enough for accurate recognition. To this end, we propose the Dynamic Hybrid Vision Transformer (DHVT) as a solution that enhances these two inductive biases. On the spatial aspect, we adopt a hybrid structure in which convolution is integrated into the patch embedding and multi-layer perceptron modules, forcing the model to capture token features along with their neighboring features. On the channel aspect, we introduce a dynamic feature aggregation module in the MLP and a brand-new "head token" design in the multi-head self-attention module, which help re-calibrate channel representations and make the representations of different channel groups interact with one another. The fusion of weak channel representations forms a representation strong enough for classification. With this design, we successfully close the performance gap between CNNs and ViTs, and our DHVT achieves a series of state-of-the-art results with lightweight models: 85.68% on CIFAR-100 with 22.8M parameters and 82.3% on ImageNet-1K with 24.0M parameters. Code is available at https://github.com/ArieSeirack/DHVT.
We aim to bridge the gap between our common-sense, few-sample human learning and large-data machine learning. We derive a theory of human-like few-shot learning from the von Neumann-Landauer principle. Modelling human learning is difficult, as how people learn varies from one person to another. Under commonly accepted definitions, we prove that all human or animal few-shot learning, and major models of such learning including the Free Energy Principle and Bayesian Program Learning, approximate our theory under the Church-Turing thesis. We find that deep generative models such as the variational autoencoder (VAE) can be used to approximate our theory and perform significantly better than baseline models, including deep neural networks, on image recognition, low-resource language processing, and character recognition.
Deep neural networks (DNNs) are often used for text classification tasks as they usually achieve high levels of accuracy. However, DNNs can be computationally intensive, requiring billions of parameters and large amounts of labeled data, which can make them expensive to use, to optimize, and to transfer to out-of-distribution (OOD) cases in practice. In this paper, we propose a non-parametric alternative to DNNs that is easy, lightweight and universal in text classification: a combination of a simple compressor like gzip with a $k$-nearest-neighbor classifier. Without any training, pre-training or fine-tuning, our method achieves results that are competitive with non-pretrained deep learning methods on six in-distribution datasets. It even outperforms BERT on all five OOD datasets, including four low-resource languages. Our method also performs particularly well in few-shot settings where labeled data are too scarce for DNNs to achieve a satisfactory accuracy.
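A minimal sketch of the compressor-plus-kNN idea this abstract describes, using the standard normalized compression distance (NCD) with gzip; the toy training sentences, labels, and function names below are invented for illustration, not taken from the paper's code:

```python
import gzip
from collections import Counter

def clen(s: str) -> int:
    """Length of the gzip-compressed byte string."""
    return len(gzip.compress(s.encode("utf-8")))

def ncd(x: str, y: str) -> float:
    """Normalized Compression Distance between two texts."""
    cx, cy, cxy = clen(x), clen(y), clen(x + " " + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

def classify(test: str, train: list, k: int = 2) -> str:
    """Predict the label of `test` by majority vote over its k nearest
    training texts under NCD -- no parameters, no training."""
    neighbors = sorted(train, key=lambda tl: ncd(test, tl[0]))[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

train = [
    ("dogs and cats are common household pets", "animals"),
    ("the cat chased the dog around the garden", "animals"),
    ("stock prices fell sharply as markets closed lower", "finance"),
    ("investors sold shares after the market report", "finance"),
]
print(classify("cats and dogs are popular household pets", train))
```

Because gzip compresses the concatenation of two similar texts better than that of two dissimilar ones, the NCD to same-topic training texts tends to be smaller, which is all the classifier needs.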
Traditional deep learning compilers rely on heuristics for subgraph generation, which impose extra constraints on graph optimization, e.g., each subgraph can only contain at most one complex operator. In this paper, we propose AGO, a framework for graph optimization with arbitrary structures to boost the inference performance of deep models by removing such constraints. To create new optimization opportunities for complicated subgraphs, we propose intensive operator fusion, which can effectively stitch multiple complex operators together for better performance. Further, we design a graph partitioning scheme that allows an arbitrary structure for each subgraph while guaranteeing the acyclic property among all generated subgraphs. Additionally, to enable efficient performance tuning on complicated subgraphs, we devise a novel divide-and-conquer tuning mechanism to orchestrate different system components. Through extensive experiments on various neural networks and mobile devices, we show that our system can improve the inference performance by up to 3.3x when compared with state-of-the-art deep compilers.
In this paper, we allocate IoT devices as resources for smart services with time-constrained resource requirements. The allocation method, named BRAD, can work under multiple resource scenarios with diverse resource richness, availability and cost, such as the intelligent healthcare system deployed by Harbin Institute of Technology (HIT-IHC). The allocation aims at bimetric balancing under the multi-scenario case, i.e., the profit and cost associated with service satisfaction are jointly optimised and balanced. Besides, we abstract IoT devices as digital objects (DO) to make them easier to interact with during resource allocation. Considering that the problem is NP-hard and the optimisation objective is not differentiable, we utilise the Grey Wolf Optimisation (GWO) algorithm as the model optimiser. Specifically, we tackle the deficiencies of GWO and significantly improve its performance by introducing three new mechanisms, forming the BRAD-GWA algorithm. Comprehensive experiments are conducted on realistic HIT-IHC IoT testbeds and several algorithms are compared, including the allocation method originally used by the HIT-IHC system, to verify the effectiveness of BRAD-GWA. BRAD-GWA achieves a 3.14-times and 29.6% objective reduction compared with the HIT-IHC method and the original GWO algorithm, respectively.
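BRAD-GWA's three new mechanisms are not described in this abstract; what follows is only a minimal sketch of the standard Grey Wolf Optimisation loop it builds on, minimising a toy sphere objective. All names, bounds, and hyperparameters here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def gwo(f, dim, n_wolves=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Minimise f over [lb, ub]^dim with the standard GWO loop: every
    wolf moves toward the three current best (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]
        a = 2.0 * (1.0 - t / iters)        # control parameter decays 2 -> 0
        new_X = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A = 2.0 * a * r1 - a           # exploration/exploitation coefficient
            C = 2.0 * r2
            D = np.abs(C * leader - X)     # distance to this leader
            new_X += leader - A * D
        X = np.clip(new_X / 3.0, lb, ub)   # average of the three pulls
    fitness = np.apply_along_axis(f, 1, X)
    best = X[np.argmin(fitness)]
    return best, float(f(best))

sphere = lambda x: float(np.sum(x ** 2))   # toy objective, minimum 0 at origin
best, val = gwo(sphere, dim=5)
print(val)
```

Because the update needs only objective evaluations, GWO suits the non-differentiable allocation objective mentioned above.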
Large-scale diffusion neural networks represent a substantial milestone in text-to-image generation, but they remain poorly understood, lacking interpretability analyses. In this paper, we perform a text-image attribution analysis on Stable Diffusion, a recently open-sourced model. To produce pixel-level attribution maps, we upscale and aggregate cross-attention word-pixel scores in the denoising subnetwork, naming our method DAAM. We evaluate its correctness by testing its semantic segmentation ability on nouns, as well as its generalized attribution quality on all parts of speech, rated by humans. We then apply DAAM to study the role of syntax in the pixel space, characterizing head-dependent heat map interaction patterns for ten common dependency relations. Finally, we study several semantic phenomena using DAAM, with a focus on feature entanglement, where we find that cohyponyms worsen generation quality and descriptive adjectives attend too broadly. To our knowledge, we are the first to interpret large diffusion models from a visuolinguistic perspective, which enables future lines of research. Our code is at https://github.com/castorini/daam.
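The core of the abstract's "upscale and aggregate" step can be sketched in isolation. The real method operates on cross-attention tensors extracted from the diffusion U-Net across layers, heads, and denoising timesteps; the sketch below runs on synthetic arrays, and the function name, shapes, and nearest-neighbor upscaling are simplifying assumptions:

```python
import numpy as np

def word_heat_map(attn_maps, out_size=64):
    """Aggregate per-layer cross-attention scores for a single word into
    one pixel-level heat map: upscale every layer's map to a common
    resolution, then sum over layers and normalise to [0, 1].
    `attn_maps` is a list of (heads, h, w) arrays, one per attention
    layer, where h and w divide `out_size`."""
    total = np.zeros((out_size, out_size))
    for m in attn_maps:
        head_mean = m.mean(axis=0)                      # average heads -> (h, w)
        fy = out_size // head_mean.shape[0]
        fx = out_size // head_mean.shape[1]
        total += np.kron(head_mean, np.ones((fy, fx)))  # nearest-neighbor upscale
    return total / total.max()

# Synthetic stand-ins for two attention layers at different resolutions.
rng = np.random.default_rng(0)
maps = [rng.random((8, 8, 8)), rng.random((8, 16, 16))]
heat = word_heat_map(maps)
print(heat.shape)
```

Summing maps from coarse and fine layers on a shared grid is what lets a single heat map reflect attention at every U-Net resolution.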
There exist many efficiency methods for natural language processing (NLP) tasks, such as pruning, distillation, dynamic inference, and quantization. We can view an efficiency method as an operator applied to a model. Naturally, we can build a pipeline of multiple efficiency methods, i.e., apply multiple operators to a model sequentially. In this paper, we study the plausibility of this idea and, more importantly, the commutativity and cumulativeness of efficiency operators. We make two interesting observations: (1) efficiency operators are commutative, in that the order of efficiency methods within a pipeline has little impact on the final result; (2) efficiency operators are also cumulative, in that the final result of combining several efficiency methods can be estimated by combining the results of the individual methods. These observations deepen our understanding of efficiency operators and provide useful guidelines for their real-world applications.
Helping end users comprehend abstract distribution shifts can greatly facilitate AI deployment. Motivated by this, we propose a novel task, dataset explanation. Given two image datasets, dataset explanation aims to point out their dataset-level distribution shifts in natural language. Current techniques for monitoring distribution shifts provide inadequate information for understanding datasets with the goal of improving data quality. Therefore, we introduce GSCLIP, a training-free framework to solve the dataset explanation task. In GSCLIP, we propose the selector as the first quantitative evaluation method to identify explanations that properly summarize dataset shifts. Furthermore, we leverage this selector to demonstrate the superiority of a generator based on language-model generation. Systematic evaluation on natural data shifts verifies that GSCLIP, a combined system of a hybrid generator group and an efficient selector, is not only easy to use but also powerful for dataset explanation.
Most real-world problems that machine learning algorithms are expected to solve face situations with 1) unknown data distribution, 2) little domain-specific knowledge, and 3) datasets with limited annotation. We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV), a learning framework for any dataset with abundant unlabeled data but very few labeled examples. By training a generative model in an unsupervised way only, the framework utilizes the data distribution to build a compressor. Using a compressor-based distance metric derived from Kolmogorov complexity, together with few labeled data, NPC-LV classifies without further training. We show that NPC-LV outperforms supervised methods on all three datasets for image classification in the low-data regime and even outperforms semi-supervised learning methods on CIFAR-10. We demonstrate how and when the negative evidence lower bound (nELBO) can be used as an approximate compressed length for classification. By revealing the correlation between compression rate and classification accuracy, we illustrate that under NPC-LV, improvements in generative models can enhance downstream classification accuracy.
As a key component of talking face generation, lip motion generation determines the naturalness and coherence of the generated talking face video. Prior literature mainly focuses on speech-to-lip generation, while research on text-to-lip (T2L) generation is scarce. T2L is a challenging task, and existing end-to-end works depend on the attention mechanism and an autoregressive (AR) decoding manner. However, AR decoding generates the current lip frame conditioned on previously generated frames, which inherently hinders inference speed and also has a negative effect on the quality of generated lip frames due to error propagation. This encourages research on parallel T2L generation. In this work, we propose a parallel decoding model for fast and high-fidelity text-to-lip generation (ParaLip). Specifically, we predict the duration of the encoded linguistic features and model the target lip frames conditioned on the encoded linguistic features and their duration in a non-autoregressive manner. Furthermore, we incorporate the structural similarity index loss and adversarial learning to improve the perceptual quality of generated lip frames and alleviate the blurry prediction problem. Extensive experiments conducted on the GRID and TCD-TIMIT datasets demonstrate the superiority of the proposed method. Video samples are available at \url{https://paralip.github.io/}.